

Search for: All records

Creators/Authors contains: "Tiede, Lydia"

Note: Clicking a Digital Object Identifier (DOI) will take you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo period.


  1. Trust in elections is paramount for a democracy, and citizens are more likely to cast ballots and support election results when they perceive election processes as trustworthy. However, with the advent of greater dependence on algorithms in election processes, we ask whether a reliance on algorithms or a hybrid system for verifying signatures allays or increases citizens' confidence in using them in elections. To answer this, we use unique survey experiments, first to determine respondents' comfort level with using such systems in elections and then to assess the circumstances that bound this trust. We find that respondents trust automated and non-automated systems similarly, but do not have a clear conception of the confidence threshold, set by policymakers, necessary for rejecting ballots. Additionally, respondents blame election officials more than algorithms when mistakes are made, although this result is contingent on the type of error and respondents' partisanship. These results have significant implications for confidence in signature verification and other election processes that rely on artificial intelligence.
  2. AI algorithms are increasingly influencing decision-making in criminal justice, including tasks such as predicting recidivism and identifying suspects by their facial features. The increasing reliance on machine-assisted legal decision-making impacts the rights of criminal defendants, the work of law enforcement agents, the legal strategies taken by attorneys, the decisions made by judges, and the public’s trust in courts. As such, it is crucial to understand how the use of AI is perceived by the professionals who interact with algorithms. The analysis explores the connection between law enforcement and legal professionals’ stated and behavioral trust. Results from three rigorous survey experiments suggest that law enforcement and legal professionals express skepticism about algorithms but demonstrate a willingness to integrate their recommendations into their own decisions and, thus, do not exhibit “algorithm aversion.” These findings suggest that there could be a tendency towards increased reliance on machine-assisted legal decision-making despite concerns about the impact of AI on the rights of criminal defendants. 